Identifying Hidden Variables from Context-Specific Independencies

Authors

  • Manon J. Sanscartier
  • Eric Neufeld
Abstract

Learning a Bayesian network from data is a model-specific task, and thus requires careful consideration of contextual information, namely, contextual independencies. In this paper, we study the role of hidden variables in learning causal models from data. We show how statistical methods can help us discover these hidden variables. We suggest hidden variables are wrongly ignored in inference because they are context-specific. We show that contextual consideration can help us learn more about true causal relationships hidden in the data. We present a method for correcting models by finding hidden contextual variables, as well as a means for refining the current, incomplete model.
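As a concrete illustration of the kind of context-specific independence the abstract refers to, here is a minimal sketch (synthetic data and hypothetical helper names, not the authors' method): a variable Y is independent of X in one context C=0, but depends on X in context C=1, so the independence holds only in a specific context and would be missed by an unconditional test.

```python
# Minimal sketch of spotting a context-specific independence from data.
# Hypothetical synthetic samples (c, x, y): in context c=0, y ignores x;
# in context c=1, y tracks x.
from collections import Counter

samples = [(0, 0, 1), (0, 1, 1), (0, 0, 1), (0, 1, 1),
           (1, 0, 0), (1, 1, 1), (1, 0, 0), (1, 1, 1)]

def cond_dist(data, c, x):
    """Empirical distribution of y given C=c and X=x."""
    counts = Counter(y for (ci, xi, y) in data if ci == c and xi == x)
    total = sum(counts.values())
    return {y: n / total for y, n in counts.items()}

# In context C=0 the distribution of y is the same for both values of x,
# so Y is independent of X there ...
print(cond_dist(samples, 0, 0) == cond_dist(samples, 0, 1))  # True
# ... but in context C=1 it differs, so the independence is context-specific.
print(cond_dist(samples, 1, 0) == cond_dist(samples, 1, 1))  # False
```

On real data one would replace the exact-equality check with a statistical test of distributional equality, but the shape of the procedure is the same: compare conditional distributions within each context separately.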


Similar articles

Decomposable Log-linear Models

The present paper considers discrete probability models with exact computational properties. In relation to contingency tables this means closed-form expressions of the maximum likelihood estimate and its distribution. The model class includes what is known as decomposable graphical models, which can be characterized by a structured set of conditional independencies between some variables given...


Fast Restricted Causal Inference

Hidden variables are well-known sources of disturbance when recovering belief networks from data based only on measurable variables. Hence models assuming the existence of hidden variables are under development. This paper presents a new algorithm "accelerating" the known CI algorithm of Spirtes, Glymour and Scheines [20]. We prove that this algorithm does not produce (conditional) independencies...


The Hidden Life of Latent Variables: Bayesian Learning with Mixed Graph Models

Directed acyclic graphs (DAGs) have been widely used as a representation of conditional independence in machine learning and statistics. Moreover, hidden or latent variables are often an important component of graphical models. However, DAG models suffer from an important limitation: the family of DAGs is not closed under marginalization of hidden variables. This means that in general we cannot...


Identifying Independencies in Causal Graphs with Feedback

We show that the d-separation criterion constitutes a valid test for conditional independence relationships that are induced by feedback systems involving discrete variables.


Context-Specific Independence in Bayesian Networks

Bayesian networks provide a language for qualitatively representing the conditional independence properties of a distribution. This allows a natural and compact representation of the distribution, eases knowledge acquisition, and supports effective inference algorithms. It is well known, however, that there are certain independencies that we cannot capture qualitatively within the Bayesian network...
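One way such independencies can be captured, as a minimal sketch (the tree structure and probability values here are hypothetical), is a tree-structured conditional probability table: when parent A=0, the child Y is independent of parent B, so a single shared leaf replaces part of the full table.

```python
# Hypothetical tree-structured CPT for P(Y=1 | A, B) exploiting
# context-specific independence: a full table would need 4 entries,
# but because Y is independent of B in the context A=0, 3 suffice.
tree_cpt = {
    0: 0.9,               # context A=0: one shared leaf, B is irrelevant
    1: {0: 0.2, 1: 0.7},  # context A=1: Y still depends on B
}

def p_y1(a, b):
    """Look up P(Y=1 | A=a, B=b) in the tree."""
    node = tree_cpt[a]
    return node if isinstance(node, float) else node[b]

print(p_y1(0, 0), p_y1(0, 1))  # same value in context A=0
print(p_y1(1, 0), p_y1(1, 1))  # different values in context A=1
```

The savings are modest here, but with many parents the number of leaves in such a tree can be exponentially smaller than the number of rows in the full table.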




Publication date: 2007